38 research outputs found

    Multimodal Grounding for Language Processing


    Knowledge Questions from Knowledge Graphs

    We address the novel problem of automatically generating quiz-style knowledge questions from a knowledge graph such as DBpedia. Questions of this kind have ample applications, for instance, to educate users about, or to evaluate their knowledge in, a specific domain. To solve the problem, we propose an end-to-end approach. The approach first selects a named entity from the knowledge graph as an answer. It then generates a structured triple-pattern query, which yields the answer as its sole result. If a multiple-choice question is desired, the approach selects alternative answer options. Finally, our approach uses a template-based method to verbalize the structured query and yield a natural language question. A key challenge is estimating how difficult the generated question is for human users. To do this, we make use of historical data from the Jeopardy! quiz show and a semantically annotated Web-scale document collection, engineer suitable features, and train a logistic regression classifier to predict question difficulty. Experiments demonstrate the viability of our overall approach.
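The pipeline described above (pick an answer entity, build a triple-pattern query that uniquely identifies it, choose distractors, and verbalize via a template) can be sketched as follows. This is a minimal illustrative stand-in, not the authors' implementation: the toy in-memory knowledge graph, the `TEMPLATES` table, and the function name `generate_question` are all assumptions; the real system queries DBpedia and adds a learned difficulty classifier.

```python
import random

# Toy knowledge graph as (subject, predicate, object) triples.
# The paper's system queries DBpedia; this miniature stand-in is
# purely illustrative.
KG = [
    ("Paris", "capitalOf", "France"),
    ("Berlin", "capitalOf", "Germany"),
    ("Madrid", "capitalOf", "Spain"),
    ("Rome", "capitalOf", "Italy"),
]

# Template-based verbalization: each predicate maps to a question
# template whose triple-pattern query yields the answer as its sole result.
TEMPLATES = {
    "capitalOf": "Which city is the capital of {obj}?",
}

def generate_question(answer, n_choices=4, rng=random):
    # Find a triple that uniquely identifies the chosen answer entity.
    subj, pred, obj = next(t for t in KG if t[0] == answer)
    question = TEMPLATES[pred].format(obj=obj)
    # Distractors: other entities that fill the same predicate slot,
    # so the wrong options are plausible alternatives.
    distractors = [s for (s, p, _) in KG if p == pred and s != answer]
    options = rng.sample(distractors, n_choices - 1) + [answer]
    rng.shuffle(options)
    return question, options, answer

q, opts, ans = generate_question("Paris")
```

A production version would replace `KG` with SPARQL queries against a live endpoint and score each generated question with the trained difficulty classifier before presenting it.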

    The peptide agonist-binding site of the glucagon-like peptide-1 (GLP-1) receptor based on site-directed mutagenesis and knowledge-based modelling

    Glucagon-like peptide-1 (7–36)amide (GLP-1) plays a central role in regulating blood sugar levels, and its receptor, GLP-1R, is a target for anti-diabetic agents such as the peptide agonist drugs exenatide and liraglutide. In order to understand the molecular nature of the peptide–receptor interaction, we used site-directed mutagenesis and pharmacological profiling to highlight nine sites as being important for peptide agonist binding and/or activation. Using a knowledge-based approach, we constructed a 3D model of agonist-bound GLP-1R, basing the conformation of the N-terminal region on that of the receptor-bound NMR structure of the related peptide pituitary adenylate cyclase-activating polypeptide (PACAP21). The relative position of the extracellular to the transmembrane (TM) domain, as well as the molecular details of the agonist-binding site itself, were found to be different from the model that was published alongside the crystal structure of the TM domain of the glucagon receptor, but were nevertheless more compatible with published mutagenesis data. Furthermore, the NMR-determined structure of a high-potency, cyclic, conformationally constrained 11-residue analogue of GLP-1 was also docked into the receptor-binding site. Despite having a different main-chain conformation to that seen in the PACAP21 structure, four conserved residues (equivalent to His-7, Glu-9, Ser-14 and Asp-15 in GLP-1) could be structurally aligned and made similar interactions with the receptor as their equivalents in the GLP-1-docked model, suggesting the basis of a pharmacophore for GLP-1R peptide agonists. In this way, the model not only explains current mutagenesis and molecular pharmacological data but also provides a basis for further experimental design.

    Generating Image Descriptions via Sequential Cross-Modal Alignment Guided by Human Gaze

    When speakers describe an image, they tend to look at objects before mentioning them. In this paper, we investigate such sequential cross-modal alignment by modelling the image description generation process computationally. We take as our starting point a state-of-the-art image captioning system and develop several model variants that exploit information from human gaze patterns recorded during language production. In particular, we propose the first approach to image description generation in which visual processing is modelled sequentially. Our experiments and analyses confirm that better descriptions can be obtained by exploiting gaze-driven attention, and shed light on human cognitive processes by comparing different ways of aligning the gaze modality with language production. We find that processing gaze data sequentially leads to descriptions that are better aligned with those produced by speakers, more diverse, and more natural, particularly when gaze is encoded with a dedicated recurrent component.
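The core idea of gaze-driven attention, namely biasing the caption decoder's attention over image regions toward regions the speaker has recently fixated, can be sketched as below. This is a simplified illustration under stated assumptions: the function name `gaze_weighted_attention`, the additive recency bias, and the region-index encoding of gaze are all placeholders for the paper's dedicated recurrent gaze encoder, not its actual architecture.

```python
import numpy as np

def gaze_weighted_attention(region_feats, gaze_seq, query):
    """One decoding step of a gaze-modulated attention sketch.

    region_feats: (R, D) visual features for R image regions.
    gaze_seq: ordered list of region indices fixated so far.
    query: (D,) decoder hidden state.
    """
    # Standard content-based attention scores.
    scores = region_feats @ query
    # Recency-weighted gaze prior: regions fixated more recently
    # receive a larger additive bias (a simple stand-in for a
    # learned recurrent encoding of the gaze sequence).
    bias = np.zeros(len(region_feats))
    for t, r in enumerate(gaze_seq):
        bias[r] += (t + 1) / len(gaze_seq)
    # Softmax over the biased scores (shifted for numerical stability).
    scores = scores + bias
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Attended context vector that would feed the caption decoder.
    return weights @ region_feats, weights

rng = np.random.default_rng(0)
feats = rng.normal(size=(5, 4))
context, weights = gaze_weighted_attention(
    feats, gaze_seq=[0, 2, 2], query=rng.normal(size=4)
)
```

Modelling gaze sequentially in this way lets the attention distribution shift over decoding steps in the same order the speaker inspected the scene, which is the alignment effect the paper measures.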